CAMERA UNIT CAPABLE OF BEING CARRIED ON BOARD A DRONE FOR MAPPING A TERRAIN, AND METHOD OF MANAGING IMAGE CAPTURE
Patent abstract:
The invention relates to a camera unit (14) adapted to be carried on board a drone (10) for mapping a terrain (16), comprising a camera (18) capable of capturing successive images of the portions of terrain overflown by the drone. The camera unit comprises means for storing the captured images, means for comparing information on the portion of terrain currently visible to the camera with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and means for sending a capture command to the camera as soon as the overlap rate of the overflown portion of terrain is less than or equal to a predetermined overlap rate.
Publication number: FR3038482A1
Application number: FR1556143
Filing date: 2015-06-30
Publication date: 2017-01-06
Inventor: Eng Hon Sron
Applicant: Parrot SA
IPC main class:
Patent description:
The invention relates to drones for mapping a terrain, and more particularly to a camera unit suitable for being carried on board such a drone. The AR.Drone 2.0 and the Bebop Drone of Parrot SA, Paris, France, or the eBee of SenseFly SA, Switzerland, are typical examples of such drones. They are equipped with a series of sensors (accelerometers, 3-axis gyrometers, altimeters) and with at least one camera, for example a vertically aimed camera capturing an image of the terrain overflown. These drones are provided with a motor, or with several rotors driven by respective motors, which can be controlled in a differentiated manner so as to pilot the drone in attitude and in speed. The invention relates more specifically to a camera unit adapted to be carried on board a drone for mapping a terrain, in particular crop fields. Such camera units are for example equipped with a multi-spectral photo sensor for measuring the reflectance of the crops, that is to say the amount of light reflected by the leaves, so as to obtain information on the state of photosynthesis. To map a terrain, these drones travel over the terrain and perform a succession of image captures. Once the images have been captured, they are processed to produce a 2D or 3D map of the overflown area, in particular so as to obtain the characteristics of the observed crops. During the overflight of the terrain to be mapped, these drones are piloted via a control device or by loading a trajectory that the drone then follows autonomously. Mapping a terrain is achieved by successively triggering the camera fitted to the drone. To perform these successive captures, it is known to establish a dedicated communication between the drone and the camera, the drone determining the successive instants at which the camera is triggered and the image captures are performed. This solution has the drawback of depending on the communication protocol between the camera and the drone imposed by the drone manufacturer. As no communication standard has been defined to date, cameras must be developed specifically for each drone manufacturer. Another known solution consists in capturing, during the flight of the drone, an image of the overflown terrain at regular time intervals, for example every "X" seconds. In order to be able to build a complete map of the overflown terrain, a large number of images must then be captured. This solution therefore has the drawback of requiring the storage and processing of a significant volume of images. More images than necessary for the mapping are generated, which requires a large storage capacity and heavy processing to build the map of the overflown area. The aim of the present invention is to remedy these drawbacks by proposing a solution that allows a complete mapping of the terrain while minimizing the number of image captures to be performed. To this end, the invention proposes a camera unit suitable for being carried on board a drone for mapping a terrain, comprising a camera capable of capturing successive images of the portions of terrain overflown by the drone.
Characteristically, the camera unit comprises means for storing the captured images, means for comparing information on the portion of terrain visible via said camera with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and means for sending a capture command to the camera as soon as the overlap rate of the overflown portion of terrain is less than or equal to a predetermined overlap rate. According to a particular embodiment, the camera is a vertically aimed camera pointing downwards. In a particular embodiment, said information corresponds to at least one piece of capture context information of a portion of terrain. In particular, the camera unit may comprise means for generating at least one piece of capture context information of the overflown portion of terrain; the storage means are then able to store the context information associated with said captured images, and the comparison means are able to compare at least one piece of context information of the overflown portion of terrain, generated by the context information generating means, with at least one piece of stored context information of at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least the previously captured image. According to one embodiment, the context information comprises geolocation information and/or the speed of movement of the camera unit and/or the angle of view of the camera and/or the orientation of the camera and/or the distance between the camera and the ground. According to a particular embodiment, the camera unit further comprises a device for estimating the altitude of said camera unit and means for storing the initial distance between the camera and the ground determined by said altitude estimation device before take-off of the drone, and the distance between the camera and the ground of the overflown portion of terrain is determined as the difference between the initial distance and the distance determined by the altitude estimation device of said camera unit during the overflight of the portion of terrain. According to another particular embodiment, the camera unit further comprises a device for analyzing the images of the camera, capable of producing a horizontal speed signal derived from an analysis of the speed of displacement of the captured portion of terrain from one image to the next, the distance between the camera and the ground further depending on said horizontal speed signal. According to a particular embodiment, the predetermined overlap rate is at most 85% and preferably at most 50%. The invention also proposes a drone for mapping a terrain, comprising a camera unit according to one of the embodiments described above. According to another aspect, the invention proposes a method for managing the capture of images by a camera unit that can be carried on board a drone for mapping a terrain, the camera unit comprising a camera able to capture successive images of the portions of terrain overflown by the drone.
Typically, the method comprises the following steps: a step of comparing information on the portion of terrain visible via the camera with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and a step of sending a capture command to the camera as soon as the overlap rate of the overflown portion of terrain is less than or equal to a predetermined overlap rate. According to a particular embodiment, said information corresponds to at least one piece of capture context information of a portion of terrain. According to another embodiment, the method further comprises a step of generating at least one piece of context information of the overflown portion of terrain, and the comparing step compares said at least one piece of context information of the overflown portion of terrain with at least one piece of stored context information of at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image. Advantageously, the method further comprises a step of storing the captured image together with its context information generated during the step of generating at least one piece of context information of the overflown portion of terrain. According to one embodiment, the context information comprises geolocation information and/or the speed of movement of the camera unit and/or the angle of view of the camera and/or the orientation of the camera and/or the distance between the camera and the ground. According to one embodiment, the method further comprises, before take-off of the drone, a step of determining the initial distance between the camera and the ground by estimating the altitude of said camera unit, and, during the flight of the drone, at least one step of determining the distance between the camera and the ground of the overflown portion of terrain as the difference between the initial distance and the estimated altitude of said camera unit. According to an alternative embodiment, the method further comprises a step of analyzing the images of the camera to produce a horizontal speed signal, derived from an analysis of the displacement of the captured portion of terrain from one image to the next, and a step of determining the distance between the camera and the ground as a function of said horizontal speed signal. In a particular embodiment of the method, the predetermined overlap rate is at most 85% and preferably at most 50%. An exemplary implementation of the present invention will now be described with reference to the accompanying drawings. Figure 1 illustrates a drone and a terrain to be mapped. Figure 2 illustrates an example of a trajectory that the drone must travel to map the terrain, together with a succession of captured images and the next image to be captured according to the invention. Figure 3 illustrates a structure of the camera unit according to the invention, carried on board a drone for example. Figure 4 shows the overflight of a terrain with a steep gradient. Figures 5a and 5b show the captured images N-1 and N, and Figure 5c illustrates the superposition of images N-1 and N. Figure 6 is a diagram illustrating the various parameters used to determine the context of an image. Figure 7 illustrates a method of managing image capture by a camera unit according to the invention. An exemplary embodiment of the invention will now be described.
In Figure 1, reference numeral 10 generally denotes a drone. In the example illustrated in Figure 1, it is a flying wing such as the eBee model from SenseFly SA, Switzerland; this drone has a motor 12. According to another exemplary embodiment, the drone is a quadricopter such as the Bebop Drone model from Parrot SA, Paris, France; this drone has four coplanar rotors whose motors are controlled independently by an integrated navigation and attitude control system. The drone 10 is provided with an on-board camera unit 14 for obtaining a set of images of the terrain to be mapped 16, which is overflown by the drone. According to the invention, the camera unit 14 is autonomous; in particular, it is able to operate independently of the drone. In other words, the camera unit does not use any information from the sensors integrated into the drone. In this way, no means of communication is required between the drone and the camera unit, which can therefore be installed in any drone. To this end, the drone 10 comprises a cavity intended to receive the camera unit 14. According to the invention, the camera unit 14 comprises a camera 18, for example a high-definition wide-angle camera with a resolution of 12 Mpixel or more (20 or 40 Mpixel), typically in CMOS technology with a resolution of 1920 x 1080 pixels, suitable for capturing successive images of the portions of the terrain 16 overflown by the drone. These images are for example RGB (Red - Green - Blue) images covering all the colors of the spectrum. A portion of terrain is a part of the overflown terrain that is visible to the camera of the on-board camera unit 14. According to a particular embodiment, the camera of the camera unit 14 is aimed vertically downwards. The camera unit 14 can further be used to evaluate the speed of the drone relative to the ground. According to an exemplary embodiment, the drone adapted to carry the camera unit 14 is provided with inertial sensors 44 (accelerometers and gyrometers) for measuring, with a certain accuracy, the angular velocities and attitude angles of the drone, that is to say the Euler angles (pitch φ, roll θ and yaw ψ) describing the inclination of the drone with respect to a horizontal plane of a fixed terrestrial reference frame UVW, it being understood that the two longitudinal and transverse components of the horizontal velocity are intimately related to the inclination about the two respective pitch and roll axes. According to a particular embodiment, the camera unit 14 is also provided with at least one inertial sensor as described above. According to a particular embodiment, the drone 10 is piloted by a remote control device such as a multimedia tablet or telephone with a touch screen and integrated accelerometers, for example a smartphone of the iPhone type (registered trademark) or other, or a tablet of the iPad type (registered trademark) or other. This is a standard device, unmodified except for the loading of a specific application software to control the piloting of the drone 10. According to this embodiment, the user controls the movement of the drone 10 in real time via the remote control device. According to another embodiment, the user defines a route via a remote control device and then sends the route information to the drone so that the drone follows this route. The remote control device communicates with the drone 10 via a bidirectional data exchange over a wireless local area network of the Wi-Fi (IEEE 802.11) or Bluetooth (registered trademarks) type.
According to the invention, in order to be able to build a 2D or 3D map of a terrain to be mapped, in particular of an agricultural crop field, that is usable and of very good quality, the successive images captured by the camera unit carried on board the drone must have a significant overlap area between them. In other words, each point of the field must be captured in several images. Figure 2 illustrates a terrain to be mapped 16 which is traversed by a drone along a path 20. A succession of images captured by the camera unit 14 is represented, each image having a significant overlap rate with the preceding image. This overlap rate must be at least 50% and preferably at least 85%. The overlap rate is for example the number of pixels in common between at least two images. The dotted line in Figure 2 illustrates the next image 24 to be captured by the camera unit 14, this image having a significant overlap rate with the previous image in the direction of movement of the drone and with another captured image. To this end, as shown in Figure 3, the drone 10 comprises a camera unit 14 according to the invention. In addition, the drone may include means 40 for receiving a flight instruction along a determined path and means 42 for controlling the flight along said path. The camera unit 14 according to the invention further comprises means 32 for storing the captured images, means 34 for comparing information on the portion of terrain visible via the camera 18 with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and means 36 for sending a capture command to the camera 18 as soon as the overlap rate of the overflown portion of terrain is less than or equal to the predetermined overlap rate. Thus, in accordance with the invention, the camera unit 14 compares information relating to the overflown portion of terrain, for example a view of the portion of terrain obtained by the camera 18 of the camera unit 14, with at least one piece of information on the previously captured image, for example the previously stored image. This comparison makes it possible to determine the size of the overlap zone between at least the previously captured image and the portion of terrain that is overflown and visible to the camera, thus defining the overlap rate. The information relating to an overflown portion of terrain is for example a low-resolution image or a high-resolution image of the portion of terrain, or any other information relating to it. In particular, the information may correspond to at least one piece of capture context information of a portion of terrain. According to a particular embodiment, the camera unit 14 further comprises, as illustrated in Figure 3, means 30 for generating at least one piece of capture context information of a portion of terrain.
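To make the role of the comparison means 34 and the command-sending means 36 concrete, the following Python sketch (not part of the patent; Footprint, overlap_rate and should_capture are hypothetical names) models each image by a rectangular ground footprint and triggers a capture as soon as the overlap rate with the previously captured image falls to the predetermined threshold.

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    """Axis-aligned ground footprint of one image, in metres (local planar frame)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def area(self) -> float:
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)

def overlap_rate(previous: Footprint, current: Footprint) -> float:
    """Fraction of the currently visible footprint already covered by the previous image."""
    dx = min(previous.x_max, current.x_max) - max(previous.x_min, current.x_min)
    dy = min(previous.y_max, current.y_max) - max(previous.y_min, current.y_min)
    if dx <= 0 or dy <= 0:
        return 0.0
    return (dx * dy) / current.area()

def should_capture(previous: Footprint, current: Footprint,
                   predetermined_rate: float = 0.85) -> bool:
    """Send a capture command as soon as the overlap rate is <= the threshold."""
    return overlap_rate(previous, current) <= predetermined_rate
```

The axis-aligned rectangular footprint and the 85% default are simplifying assumptions taken only from the figures quoted in the description; the patent leaves the exact comparison (pixels in common, context information, etc.) open.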
More precisely, according to one embodiment, the storage means 32 are able to store the context information associated with said captured images, and the comparison means 34 are able to compare at least one piece of context information of the overflown portion of terrain, generated by the context information generating means 30, with at least one piece of stored context information of at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least the previously captured image. According to the invention, in order to allow the construction of a good-quality map of the terrain while minimizing the number of image captures, the predetermined overlap rate is at least 85% and preferably at least 55%. Such an overlap rate between at least two successive images makes it possible to have several images that have captured the same point of the terrain to be mapped. Each captured image is stored in the storage means 32 together with at least one piece of capture context information. The means 30 for generating context information of the camera unit are capable of generating context information such as, for example: geolocation information obtained by a geolocation system or GPS device integrated into the camera unit 14, and/or the speed of movement of the camera unit 14, and/or the angle of view of the camera, and/or the orientation of the camera, and/or the distance h between the camera and the ground. These means 30 generate the context information from, for example, information transmitted by one or more sensors 46 integrated into the camera unit 14, as illustrated in Figure 3. According to another embodiment, these means generate the context information from information determined via the captured images. The context information used and stored during the mapping may depend on the configuration of the terrain to be mapped. In the case, for example, of a relatively flat terrain, that is to say a terrain having little relief, geolocation information can be used as the sole context information. According to this example, the comparison means 34 of the camera unit 14 compare the geolocation information of at least the previously captured and stored image with the geolocation information generated by the geolocation system integrated into the camera unit 14; from this comparison, the comparison means 34 determine an overlap rate between the previously captured image and the portion of terrain seen by the camera. If the overlap rate of the overflown portion of terrain determined by the comparison means is less than or equal to the predetermined overlap rate, then the means 36 send a command to the camera to capture the image. This captured image is stored by the storage means 32 together with its context information generated by the means 30. In such a terrain configuration, the context information may also correspond to the distance h between the camera and the ground. In another example, when the terrain has relief, in particular significant relief, the context information of the images must comprise several pieces of data so as to be able to determine the actual overlap rate between at least the previously captured and stored image and the new overflown portion of terrain. Figure 4 illustrates a terrain 16 with significant relief.
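For the flat-terrain case just described, where geolocation is the sole context information, one possible way to turn the displacement since the last capture into an overlap rate is sketched below; the pinhole footprint model, the local metric coordinate frame and the function names are assumptions for illustration, not taken from the patent.

```python
import math

def footprint_width(h: float, fov_deg: float) -> float:
    """Width of the ground strip seen by a nadir-pointing camera at height h (pinhole model)."""
    return 2.0 * h * math.tan(math.radians(fov_deg) / 2.0)

def overlap_from_geolocation(prev_xy: tuple[float, float],
                             curr_xy: tuple[float, float],
                             h: float, fov_deg: float) -> float:
    """Overlap rate estimated from the displacement since the previous capture,
    assuming flat terrain, a vertical camera and positions in a local metric frame."""
    displacement = math.dist(prev_xy, curr_xy)  # metres
    return max(0.0, 1.0 - displacement / footprint_width(h, fov_deg))
```

As a rough worked example under these assumptions, with, say, h = 50 m and FOV = 60°, the footprint is about 58 m wide, so a capture triggered at an 85% overlap threshold would occur roughly every 8 to 9 m of travel.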
As seen previously, in such a situation several pieces of context information must be stored at the time an image is captured, in particular so that the next image to be captured can be determined as soon as the actual overlap rate is less than or equal to the predetermined overlap rate. In such a context, the context information must include, in particular, the speed of movement of the camera unit 14, the angle of view of the camera, the orientation of the camera and the distance h between the camera and the ground. The orientation of the camera is for example determined from the information transmitted by at least one of the following sensors 46: a gyrometer, an accelerometer or a magnetometer embedded in the camera unit 14. The distance h between the camera and the ground can be determined according to different implementation methods, the method used being chosen in particular according to the configuration of the terrain to be mapped. When the terrain to be mapped is relatively flat, that is to say when the relief of the terrain is negligible, the method of calculating the distance h between the camera and the ground includes a preliminary step of determining this distance h, called the initial distance, performed before take-off of the drone. This initial distance between the camera and the ground is stored via storage means in the camera unit 14; it is determined by a device 38 for estimating the altitude of the camera unit 14, embedded in the camera unit as illustrated in Figure 3. This device 38 comprises for example an altitude estimator operating from the measurements of a barometric sensor and of an ultrasonic sensor, as described in particular in document EP 2 644 240 in the name of Parrot SA. The method then comprises a step of determining the distance between the camera and the ground of the overflown portion of terrain, also called the current camera-to-ground distance, by taking the difference between the stored initial distance and the distance determined by the device 38 for estimating the altitude of the camera unit 14 during the overflight. However, when the relief of the terrain to be mapped is significant, a second method of determining the distance between the camera and the ground is preferred. This second method comprises a step of analyzing the images of the camera to produce a horizontal speed signal, derived from an analysis of the speed of displacement of the portion of terrain captured from one image to the next, this speed of displacement being determined for example in pixels. To this end, the camera unit 14 further comprises a device for analyzing the images of the camera, capable of producing a horizontal speed signal derived from an analysis of the displacement of the portion of terrain captured from one image to the next. Figures 5a and 5b illustrate two successive images N-1 and N, and Figure 5c illustrates the superposition of images N-1 and N. From these two images, it is possible to determine the speed of movement of the camera unit in pixels, Vpix, represented by the arrow 50 in Figure 5c. Figure 6 illustrates the different parameters used to determine the distance between the camera and the ground.
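The displacement in pixels between images N-1 and N, from which the speed Vpix is derived, can be estimated in several ways; the sketch below uses FFT-based phase correlation as one illustrative technique, which is not the specific method specified by the patent, and whose function name is hypothetical.

```python
import numpy as np

def pixel_displacement(img_prev: np.ndarray, img_curr: np.ndarray) -> tuple[float, float]:
    """Estimate the (dy, dx) translation in pixels between two grayscale frames
    of identical shape, using FFT-based phase correlation."""
    f_prev = np.fft.fft2(img_prev)
    f_curr = np.fft.fft2(img_curr)
    cross_power = f_prev * np.conj(f_curr)
    cross_power /= np.abs(cross_power) + 1e-12       # keep only the phase information
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    shape = np.array(correlation.shape, dtype=float)
    shifts = np.array(peak, dtype=float)
    # Peaks beyond half the image size correspond to negative shifts (FFT wrap-around).
    shifts[shifts > shape / 2] -= shape[shifts > shape / 2]
    return float(shifts[0]), float(shifts[1])
```

Dividing the magnitude of the returned shift by the time elapsed between the two frames gives a pixel speed playing the role of Vpix.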
The distance h between the camera and the ground is then determined as follows, w being the width of the portion of terrain visible to the vertically aimed camera, FOV being its angle of view and Wpix being the number of pixels of the vertically aimed camera across that width: w = 2 · h · tan(FOV/2). The displacement measured in pixels thus corresponds on the ground to a displacement of w / Wpix metres per pixel, so that, knowing the speed of movement V of the camera unit, w = V · Wpix / Vpix. It is thus deduced that the distance h between the camera and the ground is determined according to the equation: h = (V · Wpix) / (2 · Vpix · tan(FOV/2)). The method of managing image capture by a camera unit adapted to be carried on board a drone 10 for mapping a terrain 16 according to the invention is now illustrated in Figure 7, the camera unit 14 comprising a camera 18 capable of capturing successive images of the portions of terrain overflown by the drone. This method of managing image capture by a camera unit 14 for mapping a terrain 16 comprises in particular the following steps: a step 72 of comparing information on the overflown portion of terrain visible via the camera 18 with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and a step 76 of sending a command to the camera to capture an image, as soon as the overlap rate of the overflown portion of terrain is less than or equal to the predetermined overlap rate. Thus, according to the invention, the method performs a comparison of information relating to the overflown portion of terrain, for example a view of the portion of terrain, with at least one piece of information on the previously captured image, for example the previously stored image. As seen above, the information relating to an overflown portion of terrain is for example a low-resolution image or a high-resolution image of the portion of terrain, or any other information relating to it. In particular, the information may correspond to at least one piece of capture context information of a portion of terrain. According to a particular embodiment, the method comprises a step 70 of generating at least one piece of context information of the overflown portion of terrain. This step makes it possible to identify the context of the camera unit 14 during the overflight of the terrain, for example the geolocation of the camera unit and/or the speed of movement of the camera unit and/or the angle of view of the camera and/or the orientation of the camera and/or the distance between the camera and the ground. Step 70 is followed by a step 72 of comparing said at least one piece of context information of the overflown portion of terrain with at least one piece of context information stored, in particular by the storage means 32, for at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image. This step makes it possible to indirectly compare the image of the portion of terrain currently being overflown by the drone 10 with the previously captured image, or even with other captured images as well; it thus determines the overlap rate of the portion of terrain being overflown with at least the previously captured image. Once the overlap rate has been determined, the method continues with a step 74 of comparing the overlap rate of the overflown portion of terrain with the predetermined overlap rate.
Indeed, in order to determine, from one or more captured and stored images, the next image to be captured, it must be determined whether the new positioning of the camera unit, following its movement, uncovers a new portion of the terrain to be mapped while still having a predetermined overlap rate with one or more images already captured and stored. If, in step 74, the determined overlap rate is greater than the predetermined overlap rate, then the method returns to step 70 by generating at least one new piece of context information of the new overflown portion of terrain. In the opposite case, that is to say when the overlap rate of the overflown portion of terrain is less than or equal to the predetermined overlap rate, step 74 continues with step 76 of sending a command to the camera to capture an image. Step 76 is followed by a step 78 of storing the captured image and its context information generated during step 70 of generating at least one piece of context information of the overflown portion of terrain. Step 78 is then followed by the previously described step 70 in order to determine the next image to be captured. These different steps are executed until images of the entire terrain to be mapped have been captured.
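The loop formed by steps 70 to 78 can be summarized by the following Python sketch; the injected callables (generate_context, overlap_rate, capture_image, mapping_done) stand in for the means 30, the comparison means 34, the command to the camera 18 and an end-of-mission test, and are hypothetical names, as is the default threshold.

```python
from typing import Callable, List, Tuple

Context = dict   # e.g. {"position": (x, y), "speed": ..., "orientation": ..., "h": ...}
Image = bytes    # placeholder for raw image data

def manage_image_capture(
    generate_context: Callable[[], Context],            # step 70 (sensors 46, GPS, ...)
    overlap_rate: Callable[[Context, Context], float],  # step 72 (comparison means 34)
    capture_image: Callable[[], Image],                  # step 76 (command to camera 18)
    mapping_done: Callable[[], bool],
    predetermined_rate: float = 0.85,
) -> List[Tuple[Image, Context]]:
    """Steps 70-78 of the management method expressed as a single loop."""
    stored: List[Tuple[Image, Context]] = []
    # Capture and store a first image so there is a previous context to compare against.
    stored.append((capture_image(), generate_context()))
    while not mapping_done():
        context = generate_context()                 # step 70: context of the current portion
        rate = overlap_rate(stored[-1][1], context)  # step 72: overlap vs previous capture
        if rate > predetermined_rate:                # step 74: overlap still sufficient,
            continue                                 # keep flying and regenerate context
        image = capture_image()                      # step 76: trigger the capture
        stored.append((image, context))              # step 78: store image + context, loop
    return stored
```

In this sketch the comparison is made only against the most recently stored capture; the patent also allows the comparison to involve several previously captured images.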
Claims:
1. Camera unit (14) adapted to be carried on board a drone (10) for mapping a terrain (16), comprising a camera (18) capable of capturing successive images of the portions of terrain overflown by the drone, characterized in that it comprises:
- storage means (32) for the captured images,
- means (34) for comparing information on the portion of terrain visible via said camera with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and
- means (36) for sending a capture command to the camera as soon as the overlap rate of the overflown portion of terrain is less than or equal to the predetermined overlap rate.
2. Camera unit according to the preceding claim, characterized in that the comparison means perform a comparison with at least one piece of capture context information of a portion of terrain.
3. Camera unit according to the preceding claim, characterized in that it comprises means (30) for generating at least one piece of capture context information of an overflown portion of terrain, and in that:
- the storage means (32) are able to store the context information associated with said captured images, and
- the comparison means (34) are able to compare at least one piece of context information of the overflown portion of terrain generated by the context information generating means with at least one piece of stored context information of at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least the previously captured image.
4. Camera unit according to one of claims 2 and 3, characterized in that the comparison means perform a comparison with geolocation information and/or the speed of movement of the camera unit and/or the angle of view of the camera and/or the orientation of the camera and/or the distance between the camera and the ground.
5. Camera unit according to the preceding claim, characterized in that it further comprises:
- a device (38) for estimating the altitude of said camera unit, and
- means for storing the initial distance between the camera and the ground determined by said altitude estimation device before take-off of the drone,
and in that the distance between the camera and the ground of the overflown portion of terrain is determined as the difference between the initial distance and the distance determined by the altitude estimation device of said camera unit during the overflight of the portion of terrain.
6. Camera unit according to the preceding claim, characterized in that it further comprises a device for analyzing the images of the camera, capable of producing a horizontal speed signal derived from an analysis of the speed of displacement of the portion of terrain captured from one image to the next, and in that the distance between the camera and the ground is further a function of said horizontal speed signal.
7. Camera unit according to one of the preceding claims, characterized in that the command-sending means send a command to the camera as soon as the overlap rate is at most 85% and preferably at most 50%.
8. Drone (10) for mapping a terrain (16), comprising a camera unit according to any one of claims 1 to 7.
9. Method for managing image capture by a camera unit adapted to be carried on board a drone (10) for mapping a terrain (16), the camera unit comprising a camera (18) capable of capturing successive images of the portions of terrain overflown by the drone, characterized in that the method comprises the following steps:
- a step (72) of comparing information on the portion of terrain visible via the camera with at least one piece of information on at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image, and
- a step (76) of sending a capture command to the camera as soon as the overlap rate of the overflown portion of terrain is less than or equal to the predetermined overlap rate.
10. Method according to the preceding claim, characterized in that said information corresponds to at least one piece of capture context information of a portion of terrain.
11. Method according to the preceding claim, characterized in that the method further comprises a step (70) of generating at least one piece of context information of the overflown portion of terrain, and in that the comparing step (72) compares said at least one piece of context information of the overflown portion of terrain with at least one piece of stored context information of at least the previously captured image, in order to determine the overlap rate of the overflown portion of terrain with respect to at least said previously captured image.
12. Method according to claim 10 or claim 11, characterized in that the method further comprises a step (78) of storing the captured image and its context information generated during the step (70) of generating at least one piece of context information of the overflown portion of terrain.
13. Method according to one of claims 10 to 12, characterized in that the context information comprises geolocation information and/or the speed of movement of the camera unit and/or the angle of view of the camera and/or the orientation of the camera and/or the distance between the camera and the ground.
14. Method according to the preceding claim, characterized in that it further comprises, before take-off of the drone, a step of determining the initial distance between the camera and the ground by estimating the altitude of said camera unit, and, during the flight of the drone, at least one step of determining the distance between the camera and the ground of the overflown portion of terrain as the difference between the initial distance and the estimated altitude of said camera unit.
15. Method according to claim 13, characterized in that it further comprises a step of analyzing the images of the camera to produce a horizontal speed signal, derived from an analysis of the displacement of the portion of terrain captured from one image to the next, and a step of determining the distance between the camera and the ground as a function of said horizontal speed signal.
16. Method according to one of claims 9 to 15, characterized in that the predetermined overlap rate is at most 85% and preferably at most 50%.